Pietro Barbiero

Università della Svizzera Italiana and University of Cambridge

If Concept Bottlenecks are the Question, are Foundation Models the Answer?

Apr 29, 2025

Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts

Apr 24, 2025

Logic Explanation of AI Classifiers by Categorical Explaining Functors

Mar 20, 2025

Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts

Mar 20, 2025

Causally Reliable Concept Bottleneck Models

Mar 06, 2025

Neural Interpretable Reasoning

Feb 17, 2025

A Survey on Federated Learning in Human Sensing

Jan 07, 2025

Interpretable Concept-Based Memory Reasoning

Jul 22, 2024

Self-supervised Interpretable Concept-based Models for Text Classification

Jun 20, 2024

Causal Concept Embedding Models: Beyond Causal Opacity in Deep Learning

May 28, 2024